Distribution-to-distribution (D2D) point cloud registration techniques such as the Normal Distributions Transform (NDT) can align point clouds sampled from unstructured scenes and provide accurate bounds on their own solution error covariance, an important feature for safety-of-life navigation tasks. D2D methods rely on the assumption of a static scene and are therefore susceptible to bias from range shadowing, self-occlusion, moving objects, and distortion artifacts as the recording device moves between frames. Deep-learning-based approaches can achieve higher accuracy in dynamic scenes by relaxing these constraints; however, DNNs produce uninterpretable solutions, which can be problematic from a safety perspective. In this paper, we propose a method of down-sampling LIDAR point clouds to exclude voxels that violate the assumption of a static scene and introduce error into the D2D scan-matching process. Our approach uses a solution consistency filter, identifying and flagging voxels where D2D contributions disagree with local estimates from a PointNet-based registration network.
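To make the screening step concrete, here is a minimal sketch of how such a consistency filter could look, assuming a hypothetical `pointnet_local_transform` callable that returns a per-voxel rigid-motion estimate from a PointNet-based registration head; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def voxelize(points, size=1.0):
    """Group points into cubic voxels; returns {voxel_index: point array}."""
    keys = np.floor(points / size).astype(int)
    voxels = {}
    for k, p in zip(map(tuple, keys), points):
        voxels.setdefault(k, []).append(p)
    return {k: np.asarray(v) for k, v in voxels.items() if len(v) >= 5}

def consistency_filter(voxels, d2d_motion, pointnet_local_transform, tol=0.5):
    """Keep voxels whose local motion estimate agrees with the D2D solution.

    d2d_motion: (t, R) from a full-scan D2D solve.
    pointnet_local_transform: hypothetical callable returning a local (t, R)
    estimate per voxel (e.g., from a PointNet-based registration network).
    """
    t_d2d, R_d2d = d2d_motion
    kept = {}
    for key, pts in voxels.items():
        t_loc, R_loc = pointnet_local_transform(pts)
        # Flag voxels where translation estimates disagree: these likely
        # violate the static-scene assumption (moving objects, shadowing).
        if np.linalg.norm(t_loc - t_d2d) < tol:
            kept[key] = pts
    return kept
```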
In this paper, we propose a method for mitigating shadowing errors in LiDAR scan matching through a preprocessing step based on a spherical grid. Because the grid is aligned with the LiDAR beams, it is relatively easy to eliminate the shadow edges that cause systematic errors in LiDAR scan matching. As we show through simulation, our proposed algorithm outperforms ground-plane removal, the most common shadow-mitigation strategy. Unlike ground-plane removal, our method works on arbitrary terrain (e.g., shadows cast on urban walls, shadows in hilly terrain) while preserving key LiDAR points on the ground, which are critical for estimating changes in height, pitch, and roll. Our preprocessing algorithm can be used with a wide range of scan-matching methods; for voxel-based scan-matching methods in particular, it provides the additional benefits of reducing computational cost and distributing LiDAR points more evenly across voxels.
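The spherical-grid idea could be sketched as follows (an illustrative reconstruction, not the authors' code): bin returns by azimuth and elevation so the grid aligns with the beam pattern, then drop cells adjacent to large range discontinuities, which is where shadow edges occur.

```python
import numpy as np

def remove_shadow_edges(points, az_bins=360, el_bins=32, jump=1.0):
    """Drop LiDAR returns adjacent to large range discontinuities.

    points: (N, 3) array in the sensor frame. Because cells are aligned
    with the beam directions, a shadow edge shows up as a range jump
    between neighboring azimuth cells at the same elevation.
    """
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])
    el = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))

    ai = ((az + np.pi) / (2 * np.pi) * az_bins).astype(int) % az_bins
    ei = np.clip(((el + np.pi / 2) / np.pi * el_bins).astype(int), 0, el_bins - 1)

    # Nearest range per (elevation, azimuth) cell; NaN marks empty cells.
    grid = np.full((el_bins, az_bins), np.nan)
    np.fmin.at(grid, (ei, ai), r)

    # A cell sits on a shadow edge if its range jumps relative to an
    # azimuth neighbor; flag both sides of the jump.
    left = np.roll(grid, 1, axis=1)
    jump_mask = np.abs(grid - left) > jump
    edge = jump_mask | np.roll(jump_mask, -1, axis=1)

    return points[~edge[ei, ai]]
```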
LiDAR data can be used to generate point clouds for navigating autonomous vehicles or mobile robot platforms. Scan matching, the process of estimating the rigid transformation that best aligns two point clouds, is the basis for LiDAR odometry, a form of dead reckoning. LiDAR odometry is particularly useful when absolute positioning signals such as GPS are unavailable. Here we propose the Iterative Closest Ellipsoidal Transform (ICET), a scan-matching algorithm that provides two novel improvements over the current state-of-the-art Normal Distributions Transform (NDT). Like NDT, ICET decomposes LiDAR data into voxels and fits a Gaussian distribution to the points within each voxel. ICET's first innovation reduces geometric ambiguity along large, flat surfaces by suppressing the solution along those directions. ICET's second innovation is to infer the output error covariance associated with the position and orientation transformation between successive point clouds; the error covariance is particularly useful when ICET is incorporated into a state-estimation routine such as an extended Kalman filter. We constructed a simulation to compare the performance of ICET and NDT in 2D space with and without geometric ambiguity, and found that ICET produces superior estimates while accurately predicting the accuracy of its own solution.
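To make the voxel-Gaussian machinery concrete, here is a small sketch (illustrative, not the authors' implementation) of the per-voxel statistics that NDT and ICET share, plus the eigenvalue test an ICET-style solver could use to detect extended flat surfaces whose along-surface directions should be suppressed.

```python
import numpy as np

def voxel_gaussians(points, size=2.0, min_pts=10):
    """Fit a Gaussian (mean, covariance) to the points in each voxel."""
    keys = map(tuple, np.floor(points / size).astype(int))
    cells = {}
    for k, p in zip(keys, points):
        cells.setdefault(k, []).append(p)
    stats = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= min_pts:
            stats[k] = (pts.mean(axis=0), np.cov(pts.T))
    return stats

def ambiguous_directions(cov, ratio=50.0):
    """Return eigenvectors along which a voxel is extended and flat.

    A large eigenvalue ratio indicates an extended surface: the voxel
    mean is well constrained across the surface but ambiguous along it,
    so an ICET-style solver would suppress residuals in those directions.
    """
    w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return [v[:, i] for i in range(len(w)) if w[i] > ratio * w[0]]
```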
Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks and has led to significant performance improvements in many domains. PT can incorporate various design choices, such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly affect the quality of the learned representations. The hyperparameters introduced by these strategies must therefore be tuned appropriately. However, setting the values of these hyperparameters is challenging: most existing methods struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be applied directly to the two-stage PT-and-FT learning process. In this work, we propose a gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method for obtaining PT hyperparameter gradients by combining implicit differentiation with backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task-weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data-augmentation neural network for SimCLR pre-training on electrocardiogram data and improve AUROC by up to 1.9%.
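As a toy illustration of one half of this mechanism, the sketch below backpropagates through a short unrolled pre-training loop to obtain a gradient for a scalar task-weight hyperparameter `lam` (the paper additionally uses implicit differentiation, which is not shown here; the data and model are placeholders).

```python
import torch

# Toy data: two pre-training "tasks" and a downstream validation set.
x = torch.randn(64, 8)
y1, y2 = torch.randn(64, 1), torch.randn(64, 1)
xv, yv = torch.randn(32, 8), torch.randn(32, 1)

lam = torch.tensor(0.5, requires_grad=True)   # PT hyperparameter (task weight)
meta_opt = torch.optim.Adam([lam], lr=1e-2)

for meta_step in range(100):
    w = torch.zeros(8, 1, requires_grad=True)  # model weights
    # Unrolled pre-training: keep the graph so gradients flow back to lam.
    for _ in range(10):
        pt_loss = lam * ((x @ w - y1) ** 2).mean() \
                  + (1 - lam) * ((x @ w - y2) ** 2).mean()
        g, = torch.autograd.grad(pt_loss, w, create_graph=True)
        w = w - 0.1 * g
    val_loss = ((xv @ w - yv) ** 2).mean()     # fine-tuning/validation proxy
    meta_opt.zero_grad()
    val_loss.backward()                        # d(val_loss)/d(lam) via the unroll
    meta_opt.step()
```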
We provide a theoretical analysis from first principles that establishes a new connection between the relational inductive biases imposed during pre-training and fine-tuning performance, while offering an extended view of general pre-trained models. We further explore how existing pre-training methods impose such relational inductive biases, finding that the vast majority of existing methods focus almost exclusively on modeling relations within individual samples rather than across samples. We corroborate these findings with standardized benchmarks spanning 3 data modalities and 10 downstream tasks. These experiments validate our theoretical analysis and provide a recipe for deriving new pre-training methods, consistent with existing ones, that conform to a user-specified relation graph.
Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on three common clinical NLP tasks as compared to nonspecific embeddings. We find that these domain-specific models are not as performant on two clinical de-identification tasks, and we argue that this is a natural consequence of the differences between de-identified source text and synthetically non-de-identified task text.
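For readers who want to try the released models, here is a minimal usage sketch with the HuggingFace `transformers` library; the model identifier shown is the commonly used hub name for these checkpoints and should be verified against the official release.

```python
from transformers import AutoTokenizer, AutoModel

# Hub id assumed from the public release of these clinical BERT models.
name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Patient admitted with chest pain.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```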
Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.
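Schematically, the recursive formulation can be read as the following loop (a hypothetical skeleton only: `step_model` stands in for the paper's two-stage UNet/Transformer model, and `render_prompt` for its image-based prompting of the partial tree).

```python
def trace_tree(image, root, step_model, render_prompt, max_steps=256):
    """Grow a tree one decision at a time from a root node.

    step_model(image, prompt, node) -> (children, stop): hypothetical
    model call returning predicted child positions for `node`, or a
    stop signal when the branch terminates.
    """
    tree = {root: []}
    frontier = [root]
    for _ in range(max_steps):
        if not frontier:
            break
        node = frontier.pop(0)
        prompt = render_prompt(image, tree)  # image-based prompt of partial tree
        children, stop = step_model(image, prompt, node)
        if stop:
            continue
        for child in children:
            tree[node].append(child)
            tree[child] = []
            frontier.append(child)
    return tree
```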
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
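The benchmark's two readout styles can be sketched as heads on a shared encoder (illustrative PyTorch, not the benchmark's reference code; the class counts are placeholders, and the official loaders and splits live at the repository above).

```python
import torch
import torch.nn as nn

class MultiTaskReadout(nn.Module):
    """Shared encoder with a region-classification head (global attribute)
    and a pixel-level segmentation head (microstructures)."""
    def __init__(self, n_regions=4, n_microstructures=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.region_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_regions)
        )
        self.seg_head = nn.Conv2d(64, n_microstructures, 1)

    def forward(self, x):  # x: (B, 1, H, W) image slice
        h = self.encoder(x)
        return self.region_head(h), self.seg_head(h)

model = MultiTaskReadout()
logits_region, logits_seg = model(torch.randn(2, 1, 128, 128))
```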
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding computation in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this framework to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than those in previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work deals only with puzzles of small constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest known algorithms, our work provides evidence of, and perhaps a path to, finding families of strong USPs that imply matrix multiplication algorithms more efficient than those currently known.
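For very small puzzles, the strong-USP property can be checked by brute force rather than by SAT/IP reduction. The sketch below assumes the standard definition from the literature (verify against the paper): for every pair of row permutations, either both are the identity or some row and column satisfy exactly two of the three value conditions.

```python
from itertools import permutations

def is_strong_usp(puzzle):
    """Brute-force strong-USP check for tiny puzzles.

    puzzle: list of equal-length rows over {1, 2, 3}. Assumed definition:
    for all permutations pi2, pi3 of the rows (pi1 fixed to the identity
    w.l.o.g.), either both are the identity, or there exist a row u and a
    column i such that exactly two of u[i] == 1, pi2(u)[i] == 2,
    pi3(u)[i] == 3 hold. Cost is (n!)^2 over n rows, so this is only
    feasible for very small puzzles; the paper's SAT/IP reductions scale
    much further.
    """
    rows = [tuple(r) for r in puzzle]
    n, k = len(rows), len(rows[0])
    idx = tuple(range(n))
    for p2 in permutations(idx):
        for p3 in permutations(idx):
            if p2 == idx and p3 == idx:
                continue
            ok = any(
                (rows[r][i] == 1) + (rows[p2[r]][i] == 2) + (rows[p3[r]][i] == 3) == 2
                for r in idx for i in range(k)
            )
            if not ok:
                return False
    return True

# Illustrative input only (a width-2, two-row puzzle).
print(is_strong_usp([(1, 3), (3, 2)]))
```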
Agile robotics presents a difficult challenge: robots moving at high speeds require precise, low-latency sensing and control. Creating agile motion that accomplishes the task at hand while remaining safe to execute is a key requirement for agile robots to gain human trust. This requires designing new approaches that are flexible and maintain knowledge of world constraints. In this paper, we consider the problem of building a flexible and adaptive controller for a challenging agile mobile manipulation task: hitting ground strokes on a wheelchair tennis robot. We propose and evaluate an extension to prior work on learning striking behaviors using a probabilistic movement primitive (ProMP) framework by (1) demonstrating the safe execution of learned primitives on an agile mobile manipulator setup, and (2) proposing an online primitive refinement procedure that utilizes evaluative feedback from humans on the executed trajectories.
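A minimal sketch of this kind of refinement loop follows, assuming a standard ProMP parameterization (trajectories as basis-function weights drawn from a Gaussian) and scalar human feedback per executed stroke; the update shown is a generic reward-weighted one in the spirit of episodic policy search, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 20                                   # number of basis-function weights
mu, Sigma = np.zeros(dim), np.eye(dim)     # ProMP weight distribution

def execute_and_rate(w):
    """Placeholder: execute trajectory Phi @ w on the robot and return a
    human-provided evaluative score in [0, 1] (toy stand-in below)."""
    return np.exp(-0.1 * np.sum((w - 1.0) ** 2))

for it in range(50):
    W = rng.multivariate_normal(mu, Sigma, size=8)  # sample candidate strokes
    scores = np.array([execute_and_rate(w) for w in W])
    probs = scores / scores.sum()
    # Reward-weighted update of the primitive's weight distribution,
    # nudging the ProMP toward trajectories the human rated highly.
    mu_new = probs @ W
    diff = W - mu_new
    Sigma = (probs[:, None] * diff).T @ diff + 1e-3 * np.eye(dim)
    mu = mu_new
```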